Photoelectric measurement

Non-contact three-dimensional emissivity distribution measurement method of M8 LiDAR echo
Li Ronghua, Deng Yuan, Xue Haopeng, Zhou Xinchen, Zhao Mingshuo
2024, 53(4): 20230672. doi: 10.3788/IRLA20230672
  Objective  Emissivity is an important physical quantity that characterizes the radiation ability of a material surface. It is an important thermophysical parameter with applications in many fields. Emissivity is not an intrinsic property of an object; it depends on temperature, wavelength, viewing angle and other factors, which makes it difficult to measure accurately. At present, most emissivity measurement methods are contact measurements of a single material and cannot measure the three-dimensional emissivity distribution of complex targets. Combining the characteristics of LiDAR, a non-contact three-dimensional emissivity distribution measurement method based on LiDAR echo is proposed.  Methods  Firstly, the echo intensity characteristics are analyzed based on the LiDAR range equation, the main factors affecting the echo intensity are identified, and target point cloud data with intensity information are obtained by line-array LiDAR scanning of a 95% reflectivity standard diffuse reflector plate (Fig.4). The multi-frame single-line point clouds are stacked (Fig.5), and a three-dimensional point cloud image of radar intensity carrying reflectance spectral characteristics is obtained (Fig.8). Secondly, piecewise polynomial models are used to fit the distance-intensity and incident angle-intensity relationships (Tab.4). Based on the obtained piecewise polynomial models, the echo intensity is corrected for the influence of distance and incident angle, so that the echo intensity measured at different distances and incident angles truly reflects the reflectance spectral characteristics of the target, and the validity of the correction model is verified (Fig.15-16). Finally, based on the obtained piecewise polynomial correction model, intensity correction is applied to the three-dimensional radar-intensity point cloud of a scaled satellite model fitted with reflectivity ground-truth patches. The reflectivity of the target surface is calculated from the reflectance spectral characteristics of the corrected echo intensity, the emissivity is further derived by the reflection method, and the three-dimensional emissivity distribution of the scaled satellite model is obtained (Fig.21).  Results and Discussions  The correction results show that the standard deviation (STD) of the echo intensity of the standard diffuse reflector under the influence of distance is reduced from 50.58 to 3.49 after correction, and the STD under the influence of incident angle is reduced from 19.25 to 3.17 (Tab.5). The coefficients of variation of the echo intensity under the distance effect and the incident angle effect are reduced from 0.267 6 and 0.343 8 to 0.042 0 and 0.041 2, respectively (Tab.5), and the consistency of the echo intensity is improved by 84.31% and 88.02%, respectively (Tab.6). The average emissivity deviations of the surface patches of the three satellite models are controlled within 3.33%, 4.84% and 4.44%, respectively (Tab.7).  Conclusions  Most current emissivity measurement methods are contact measurements of a single material and cannot measure the three-dimensional emissivity distribution of complex targets.
Combining the characteristics of LiDAR, a non-contact three-dimensional emissivity distribution measurement method based on the M8 LiDAR echo is therefore proposed. The M8 line-array LiDAR used in this paper operates at 905 nm, which lies in the near-infrared band, so emissivity measurement in the mid- and long-wave infrared bands is not covered. The reflectivity values of the materials used in this paper are ground-truth values at room temperature, and the influence of temperature on emissivity measurement is not considered. In addition, because different LiDARs differ in physical characteristics and express echo intensity with inconsistent units and numerical scales, the current measurement method is only applicable to the specific scanning instrument used here, and the robustness and universality of the emissivity measurement method deserve further study. Future work will therefore improve the universality of the method and introduce the influence of temperature on emissivity measurement.
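The intensity-correction and reflection-method steps described above can be illustrated with a short numerical sketch. The code below is a minimal, hypothetical example (synthetic reference-plate data, made-up break points and reference conditions; not the paper's actual model or coefficients): it fits piecewise polynomials to distance-intensity and angle-intensity samples of a 95% diffuse plate, normalises a target echo intensity to the reference conditions, converts it to reflectivity against the plate, and obtains emissivity as 1 − reflectivity for an opaque surface.

```python
import numpy as np

# Hypothetical reference-plate scan: distance (m), incidence angle (deg), raw intensity
d_ref = np.linspace(2, 20, 50)
i_ref_d = 2000 / d_ref**2 + 80             # synthetic distance-intensity samples
a_ref = np.linspace(0, 60, 50)
i_ref_a = 900 * np.cos(np.radians(a_ref))  # synthetic angle-intensity samples

def piecewise_fit(x, y, breaks, deg=3):
    """Fit one low-order polynomial per segment; return list of ((lo, hi), coeffs)."""
    models, edges = [], [x.min(), *breaks, x.max()]
    for lo, hi in zip(edges[:-1], edges[1:]):
        m = (x >= lo) & (x <= hi)
        models.append(((lo, hi), np.polyfit(x[m], y[m], deg)))
    return models

def piecewise_eval(models, x):
    for (lo, hi), c in models:
        if lo <= x <= hi:
            return np.polyval(c, x)
    return np.polyval(models[-1][1], x)

dist_model = piecewise_fit(d_ref, i_ref_d, breaks=[8.0])
ang_model  = piecewise_fit(a_ref, i_ref_a, breaks=[30.0])

# Normalise a target point's intensity to the reference distance/angle of the 95% plate
d0, a0 = 5.0, 0.0
def corrected_intensity(i_raw, d, a):
    k_d = piecewise_eval(dist_model, d0) / piecewise_eval(dist_model, d)
    k_a = piecewise_eval(ang_model,  a0) / piecewise_eval(ang_model,  a)
    return i_raw * k_d * k_a

i_corr = corrected_intensity(i_raw=60.0, d=12.0, a=40.0)
rho = 0.95 * i_corr / piecewise_eval(dist_model, d0)  # reflectivity relative to the 95% plate
emissivity = 1.0 - rho                                # reflection method (opaque surface, Kirchhoff's law)
print(round(float(rho), 3), round(float(emissivity), 3))
```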
Internal parameter calibration method of line-scan camera based on 2.5D calibration fan
Zhang Xu, Mao Qingzhou, Shi Chunlin, Hu Qingwu, Jin Guang, Zhou Hao, Xie Yi
2024, 53(4): 20230670. doi: 10.3788/IRLA20230670
  Objective  Aiming at the difficulty and high cost of regularly calibrating the internal parameters of line-scan cameras in industrial production lines or integrated equipment, an internal parameter calibration method for line-scan cameras based on a 2.5D calibration fan is proposed. A suitable calibration object is designed, and the internal parameter calibration model of the line-scan camera is established. Linear transformation theory is used to calculate the initial values of the model parameters, and an improved Levenberg-Marquardt (L-M) algorithm is used to optimize the camera parameters. Experimental results show that the internal parameters calibrated by this method have high accuracy and good consistency; the maximum re-projection error is less than 0.28 pixel and the average RMSE is 0.112 pixel.  Methods  A specific 2.5D calibration fan is designed. An internal parameter calibration model of the line-scan camera including lens distortion is constructed, which takes into account the two attitude angles of the target relative to the camera. The initial values of the model parameters are solved by a linear transformation of the equations, and the improved L-M algorithm is used to accelerate the optimization of the camera parameters. The detailed calculation steps and data processing procedure are given, and the feasibility of the method is verified with simulated and measured data.  Results and Discussions  Theoretical analysis and experimental results show that the line-scan camera calibration method is simple and flexible, and a large number of regularly distributed feature point pairs can be obtained. The parameter calibration accuracy is not limited by the camera movement accuracy or a specific direction. In addition, when the angle between the fan-rib surface and the target surface is less than 10°, camera internal parameters with high precision and high consistency can be obtained. The maximum feature point reprojection error is better than 0.28 pixel, the average RMSE is 0.112 pixel, and the standard deviation is only 0.014 pixel.  Conclusions  Line-scan cameras are often integrated inside equipment in modular form together with other sensors, and regular calibration of the internal parameters of such cameras is costly and difficult. In view of this, an internal parameter calibration method for line-scan cameras based on a 2.5D calibration fan is proposed. The 2.5D calibration fan combines the three-dimensional measurement effect of a 3D target with the low cost and high precision of a 2D target. The number of feature points is large and their distribution is regular, which avoids the problem of easily lost features, and the feature points and image points are easy to match. The constructed calibration model takes into account the lens distortion error and the small attitude angle between the target surface and the image surface, so the camera movement direction and the position of the calibration object are not strictly restricted. Experiments show that the camera internal parameters calibrated with the fan ribs at θ < ±10° are the best, and the calibration results have high accuracy and good consistency. The reprojection error of 89% of the feature points is less than 0.20 pixel, the maximum error is better than 0.28 pixel, and the average RMSE is 0.112 pixel.
In addition, compared with the standard L-M algorithm, the improved L-M algorithm reduces the number of iterations by half without affecting the accuracy of parameter optimization.
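As an illustration of the reprojection-error minimisation that the calibration relies on, the following sketch implements a generic Levenberg-Marquardt loop with adaptive damping on a toy one-dimensional projection model (assumed parameters f, v0, k1 and synthetic data; it is not the paper's full internal-parameter model or its specific L-M improvement).

```python
import numpy as np

# Toy line-scan projection: u = f * (y/z) * (1 + k1*(y/z)**2) + v0, params = [f, v0, k1]
def project(params, pts):
    f, v0, k1 = params
    r = pts[:, 0] / pts[:, 1]
    return f * r * (1 + k1 * r**2) + v0

def residuals(params, pts, u_obs):
    return project(params, pts) - u_obs

def levenberg_marquardt(p0, pts, u_obs, iters=100, lam=1e-3):
    """Minimal L-M loop with a numeric Jacobian and adaptive damping."""
    p = np.asarray(p0, float)
    for _ in range(iters):
        r = residuals(p, pts, u_obs)
        J = np.empty((r.size, p.size))
        for j in range(p.size):                     # forward-difference Jacobian
            dp = np.zeros_like(p); dp[j] = 1e-6 * max(1.0, abs(p[j]))
            J[:, j] = (residuals(p + dp, pts, u_obs) - r) / dp[j]
        A = J.T @ J + lam * np.diag(np.diag(J.T @ J))
        step = np.linalg.solve(A, -J.T @ r)
        if np.sum(residuals(p + step, pts, u_obs)**2) < np.sum(r**2):
            p, lam = p + step, lam * 0.5            # accept step, relax damping
        else:
            lam *= 2.0                              # reject step, increase damping
        if np.linalg.norm(step) < 1e-10:
            break
    return p

# Synthetic calibration data (object points on the view plane and observed pixel coordinates)
rng = np.random.default_rng(0)
pts = np.column_stack([rng.uniform(-0.2, 0.2, 200), rng.uniform(0.8, 1.2, 200)])
true = np.array([4000.0, 1024.0, -0.05])
u_obs = project(true, pts) + rng.normal(0, 0.05, 200)

est = levenberg_marquardt([3500.0, 1000.0, 0.0], pts, u_obs)
rmse = np.sqrt(np.mean(residuals(est, pts, u_obs)**2))
print(est.round(3), f"reprojection RMSE = {rmse:.3f} px")
```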
Dual-wavelength interferometric algorithm based on spatial-temporal conjugate complex function coupling
Cheng Jinlong, Zhu Liyan, Chen Lu, Yang Zhongming, Gao Zhishan, Yuan Qun
2024, 53(4): 20230661. doi: 10.3788/IRLA20230661
  Objective  Traditional single-wavelength interferometry cannot unwrap the correct phase when measuring a surface with a step or groove whose depth is larger than half a wavelength. The dual-wavelength interferometry (DWI) technique employs an extra wavelength to generate a longer beat-frequency synthetic wavelength λs. Because the synthetic wavelength is much longer than the optical working wavelength, DWI greatly extends the measurable discontinuity limit of interferometry, and it can achieve simultaneous, accurate, large-dynamic-range measurements of multi-scale morphology parameters such as the macroscopic profile and local morphology with steps. Meanwhile, in simultaneous dual-wavelength interferometry (SDWI), the two single-wavelength interferograms are captured simultaneously to accelerate data collection, which is less sensitive to vibration and offers time saving and higher efficiency. In practice, the dual-wavelength interferogram is usually captured by a monochrome sensor, which is more convenient and economical, and the generated dual-wavelength Moiré fringe pattern appears as a simple incoherent additive superposition of the two corresponding single-wavelength interferograms. The low beat-frequency envelope of the generated fringe pattern indirectly carries the needed synthetic-wavelength information, whose direct extraction is difficult. For this purpose, we present a dual-wavelength interferometric algorithm that combines spatial-temporal conjugate complex function coupling with a double phase shift strategy.  Methods  The method acquires multiple groups of phase-shifted dual-wavelength interferograms, and each group consists of four consecutive interferograms. The designed double phase shift strategy requires the phase shift step among the four frames in each group to be π/2 at one single wavelength, and π/2 at the synthetic wavelength between adjacent groups (Fig.2). By dual-wavelength squeezing within each group, the temporal phase shift in each group is converted into a spatial carrier in the generated dual-wavelength spatial-temporal fringe (STF). Therefore, the +1-order spectral lobes of the two wavelengths can easily be separated from the others and filtered in the Fourier spectrum of the generated dual-wavelength STF, without an extra spatial carrier or background elimination. After appropriate band-pass filtering and inverse Fourier transform, the single-wavelength interferometric complex functions are obtained. Subsequently, when the conjugate single-wavelength interferometric complex functions are multiplied, the synthetic-wavelength interferometric fringe pattern can be extracted directly (Fig.1). The synthetic-wavelength interferograms obtained from the groups, with a π/2 phase-shift step at λs, are demodulated by the conventional phase-shift algorithm to retrieve the final synthetic-wavelength phase.  Results and Discussions  Simulations verify that the proposed method has a lower linear carrier requirement than the spatial-domain Fourier transform demodulation method, merely about 0.077 times that of the latter numerically, even when phase-shift deviations at the different wavelengths exist (Fig.4). Besides, the feasibility and applicability of the proposed method are verified by simulation and experimental results.
Numerical simulation indicates that the demodulation error is better than a PV of 0.556 9 nm and an RMS of 0.089 7 nm even when the fringe number at λs is 1 (Fig.3). In addition, for the test step in the experiment, the results validate the effectiveness of the proposed method for interferograms with a low linear carrier, and the step height deviation of the proposed method is better than 0.94% for a step with a height of 7.8 μm, even with one fringe at λs (Fig.9).  Conclusions  To extract and retrieve the lower-frequency synthetic-wavelength interferometric fringe from the dual-wavelength Moiré fringe, we present a dual-wavelength interferometric algorithm that combines spatial-temporal conjugate complex function coupling with a double phase shift strategy. Several groups of phase-shifted dual-wavelength interferograms are acquired, with every four consecutive frames forming a group. The temporal phase shift within each group is converted into a spatial carrier for spectral separation with a low spatial carrier. When the obtained spatial-temporal conjugate complex functions are coupled by multiplication, one synthetic-wavelength interferogram can be extracted directly from each group without introducing interferometric information at the other wavelength. Owing to the designed π/2 phase shift at the synthetic wavelength between adjacent groups, the extracted synthetic-wavelength interferograms from the groups are demodulated directly by the conventional phase-shift algorithm. The proposed method has a lower carrier requirement than the traditional spatial-domain Fourier demodulation method, merely about 0.077 times that of the latter numerically, even when phase-shift deviations for the different wavelengths exist.
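The core of the conjugate-coupling step can be summarised with the standard dual-wavelength relations below (generic DWI notation; the amplitudes a1, a2 and phases φ1, φ2 are not symbols taken from the paper): multiplying one single-wavelength complex function by the conjugate of the other yields a term whose phase is the synthetic-wavelength phase.

```latex
% Standard DWI relations, sketched in generic notation
\begin{align}
  \lambda_{\mathrm{s}} &= \frac{\lambda_1 \lambda_2}{\lvert \lambda_1 - \lambda_2 \rvert}, \\
  C_1(x,y) &= a_1(x,y)\, e^{\,\mathrm{i}\varphi_1(x,y)}, \qquad
  C_2(x,y)  = a_2(x,y)\, e^{\,\mathrm{i}\varphi_2(x,y)}, \\
  C_1(x,y)\,C_2^{*}(x,y) &= a_1 a_2\, e^{\,\mathrm{i}\left[\varphi_1(x,y)-\varphi_2(x,y)\right]}
                          = a_1 a_2\, e^{\,\mathrm{i}\varphi_{\mathrm{s}}(x,y)} .
\end{align}
```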
Deep hole roundness measurement method of circular structured light system
Chen Zhenya, Ma Zhuoqiang, Li Xiang, Shen Xingquan, Yang Shangjin, Miao Hongbin, Lu Chuanjie
2024, 53(4): 20230660. doi: 10.3788/IRLA20230660
  Objective  A circular-structured-light measurement system generates circular structured light by reflecting a laser beam off a conical reflector and calculates three-dimensional coordinates using laser triangulation and close-range photogrammetry algorithms. This kind of measurement system has been widely studied because of its advantages such as high flexibility, high accuracy and simple structure. However, current circular-structured-light systems have problems in measuring roundness: it is difficult to make the laser, the camera and the deep hole to be measured parallel or coaxial, so an accurate circular cross-section cannot be measured directly. In this work, a measurement system based on circular structured light is constructed, the mechanism of the systematic error is analyzed, and a roundness measurement method based on the circular-structured-light system is proposed, which compensates for the systematic errors to a certain extent.  Methods  The mechanism that generates systematic errors when the circular-structured-light system measures roundness is analyzed (Fig.3). A high-precision motorized linear slide is used to move the deep-hole part to be measured to complete the full-field measurement, a high-resolution point cloud is obtained, and its axis is fitted. Using the Rodrigues formula, the inner-surface point cloud is rigidly transformed so that the point cloud axis is parallel to the z-axis, and points near a chosen z coordinate are selected as the cross-section for roundness assessment (Fig.4). The roundness evaluation is completed by a grid search algorithm (Fig.5).  Results and Discussions  The compensation provided by the proposed method works well in the roundness measurement experiments, in which the measurement uncertainty is 4.78 µm (Tab.3). For the circular-structured-light measurement system, a long rod can be mounted behind the laser to increase the axial measurement range of the system. For the proposed roundness measurement method, the resolution of the 3D point cloud can be improved by reducing the step length of the motorized linear slide to further improve the measurement accuracy.  Conclusions  A three-dimensional circular-structured-light measurement system is built, the error generated when the system measures roundness is analyzed, and a method for accurately measuring roundness based on the circular-structured-light system is proposed. The measurement experiments verify that the method has a good compensating effect when measuring roundness.
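A compact sketch of the axis-alignment and roundness-evaluation steps is given below, using synthetic data: the Rodrigues formula builds the rotation that brings a fitted bore axis onto the z-axis, and a coarse grid search over candidate centres (a simplified stand-in for the paper's grid search) evaluates the roundness of one cross-section.

```python
import numpy as np

def rodrigues_align_to_z(axis):
    """Rotation matrix (Rodrigues formula) that maps a given axis direction onto the z-axis."""
    a = axis / np.linalg.norm(axis)
    z = np.array([0.0, 0.0, 1.0])
    k = np.cross(a, z)
    s, c = np.linalg.norm(k), float(np.dot(a, z))
    if s < 1e-12:                                   # already (anti-)parallel to z
        return np.eye(3) if c > 0 else np.diag([1.0, -1.0, -1.0])
    k /= s
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + s * K + (1 - c) * (K @ K)

def roundness_of_section(points, z0, dz=0.05):
    """Roundness (max radius minus min radius) of the slice |z - z0| < dz, using a coarse
    grid search over candidate centres."""
    sl = points[np.abs(points[:, 2] - z0) < dz]
    cx0, cy0 = sl[:, 0].mean(), sl[:, 1].mean()
    best = np.inf
    for cx in np.linspace(cx0 - 0.05, cx0 + 0.05, 41):
        for cy in np.linspace(cy0 - 0.05, cy0 + 0.05, 41):
            r = np.hypot(sl[:, 0] - cx, sl[:, 1] - cy)
            best = min(best, r.max() - r.min())
    return best

# Synthetic inner-surface cloud: a 25 mm radius bore, 50 mm long, with ~5 µm radial form error
rng = np.random.default_rng(1)
t, z = rng.uniform(0, 2 * np.pi, 20000), rng.uniform(0, 50, 20000)
r = 25.0 + rng.normal(0, 0.005, 20000)
cloud = np.column_stack([r * np.cos(t), r * np.sin(t), z])

axis_true = np.array([0.02, 0.01, 1.0])  # bore axis slightly tilted relative to z
R = rodrigues_align_to_z(axis_true)
cloud = cloud @ R                        # tilt the cloud (row-vector convention)

aligned = cloud @ R.T                    # in practice R comes from the axis fitted to the cloud
print(f"roundness at z = 25 mm: {roundness_of_section(aligned, 25.0) * 1e3:.1f} µm")
```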
3D shape measurement of transparent objects by phase deflection based on multi-frequency phase shift
Su Chaoyang, Wang Zhangying, Ni Yubo, Gao Nan, Meng Zhaozong, Yang Zeqing, Zhang Guofeng, Yin Wei, Zhao Hongwei, Zhang Zonghua
2024, 53(4): 20230702. doi: 10.3788/IRLA20230702
  Objective  Phase Measuring Deflectometry (PMD) is widely used in optical 3D measurement for optical surface measurement, rapid detection and other fields because of its speed, high precision, stability and robustness to interference. Due to refraction and reflection at the upper and lower surfaces of transparent objects, the camera collects mixed fringes reflected from different surfaces, which are difficult for traditional PMD to measure effectively in three dimensions. Existing phase extraction methods require a highly accurate initial phase and a large number of fringe patterns. To solve this problem, a PMD method based on multi-frequency phase shifting is proposed to measure the 3D morphology of transparent object surfaces.  Methods  This study proposes a multi-frequency phase-shifting-based PMD for measuring the 3D surface morphology of transparent objects. Firstly, the display screen shows sinusoidal fringes of different frequencies combined with multi-step phase shifting, and the camera collects, from another angle, the mixed fringes reflected and superimposed by the surfaces of the object. Subsequently, the mixed fringes are separated iteratively by the least squares method, the wrapped phases of the upper and lower surfaces are obtained, and the unwrapped phase is obtained by the optimum three-fringe selection method of temporal phase unwrapping. Then, the relationship between phase and gradient is determined by gradient calibration, and the gradient of the measured object relative to the reference plane is determined from the unwrapped phase. Finally, gradient integration is used to reconstruct the 3D morphology of the transparent object surface.  Results and Discussions  To prove the effectiveness of the proposed method, a glass plate with a thickness of 3 mm and a plano-convex lens with a radius of curvature of 515.09 mm were measured, and a comparative experiment between the multi-frequency phase-shifting method and the multi-frequency method was conducted to verify the effectiveness of the proposed phase separation method (Fig.14). The experimental results show that the proposed method can effectively measure the 3D morphology of transparent object surfaces (Fig.16, Fig.19). Compared with the existing methods, the average error in measuring the upper surface of the transparent glass plate is reduced from 32.4 μm to 5.1 μm (Tab.2). The method effectively avoids the influence of large deviations in the initial phase value, shortens the calculation time, and is suitable for 3D shape measurement of transparent objects with different shapes.  Conclusions  A method for measuring the 3D morphology of transparent object surfaces based on multiple frequencies is proposed. It makes up for the deficiency of phase separation in traditional phase deflectometry and can effectively measure the 3D morphology of transparent object surfaces. By combining fringes of different frequencies with multi-step phase shifting, the upper and lower surfaces of transparent objects are separated using multiple iterations of the least squares method, which reduces the accuracy requirement on initial phase values, improves numerical stability, facilitates phase convergence and shortens the calculation time.
Compared with the traditional methods, the proposed multi-frequency method for measuring the 3D topography of transparent objects improves not only the practicability and numerical stability of the iterative algorithm, but also the accuracy of the topography measurement.
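For reference, the phase-shifting building block underlying the method can be written as a least-squares N-step estimator; the sketch below handles only a single reflection (the paper's contribution is the additional iterative least-squares separation of the superposed upper- and lower-surface fringes, which is not reproduced here).

```python
import numpy as np

def n_step_phase(frames):
    """Least-squares wrapped phase from N equally phase-shifted fringes
    I_n = A + B*cos(phi + 2*pi*n/N). This is the single-reflection building block;
    the mixed-fringe case adds a second cosine term per surface."""
    frames = np.asarray(frames, float)
    delta = 2 * np.pi * np.arange(frames.shape[0]) / frames.shape[0]
    s = np.tensordot(np.sin(delta), frames, axes=1)
    c = np.tensordot(np.cos(delta), frames, axes=1)
    return np.arctan2(-s, c)

# Synthetic 8-step fringes for a single reflection
x = np.linspace(0, 1, 512)
phi_true = 2 * np.pi * 6 * x                       # 6 fringes across the field
frames = [120 + 80 * np.cos(phi_true + 2 * np.pi * n / 8) for n in range(8)]
phi_wrapped = n_step_phase(frames)

# The recovered wrapped phase agrees with the true phase up to multiples of 2*pi
print(np.allclose(np.angle(np.exp(1j * (phi_wrapped - phi_true))), 0, atol=1e-6))
```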
Method for measuring laser damage threshold of optical thin film elements based on quantitative damage evaluation
Xin Lei, Yang Zhongming, Meng Jun, Liu Zhaojun
2024, 53(3): 20230614. doi: 10.3788/IRLA20230614
  Objective  Optical thin film components play a critical role in high-power lasers, and their ability to withstand laser-induced damage is crucial for the overall performance of laser systems. Accurate measurement of the laser-induced damage threshold (LIDT) of thin film components is of great significance for improving the lifetime and output efficiency of lasers. However, the traditional LIDT test method, based on the scheme outlined in GB/T 16601, evaluates the threshold through damage probability, which requires numerous repetitive experiments and is both cumbersome and time-consuming. Moreover, evaluation based only on whether damage occurs on the film surface introduces certain errors. The stability and reliability of the damage are influenced by factors such as laser stability, environmental disturbances, coating processes and internal defects, which cannot be eliminated and affect the measurement results and accuracy. Additionally, probability statistics require testing many samples, resulting in high cost. Therefore, it is necessary to develop a fast and efficient method for measuring the LIDT.  Methods  In this paper, a novel method of quantitative evaluation of the laser-induced damage degree (QELDD) is presented for quantitatively assessing the degree of laser-induced damage in thin film components. The method quantifies laser-induced damage at different energy densities and evaluates the LIDT by fitting the damage trend. To accurately quantify the laser-damaged area, super-resolution white-light interferometric measurement is employed, which ensures nanoscale measurement accuracy. Simulation results demonstrate that the proposed method allows three-dimensional reconstruction of nanoscale damage defects with a reconstructed volume error of less than 0.01%. Experimental samples, including laser resonator mirrors and window plates, were measured using this method without repeating the laser damage experiments. The results are consistent with those obtained using the S-on-1 method, with a deviation not exceeding 0.5 J/cm². The standard deviations of the measurement results are 0.361 J/cm² and 0.064 J/cm², respectively.  Results and Discussions  In the simulation of the laser damage structure model on the surface of the test element, the optimization algorithm achieves a relative error of less than 0.01% in the three-dimensional measurement results (Fig.7), and the reconstruction deviation is at the nanometer level (Fig.8). In the experiment, samples of two laser components are used. The measurement results for the laser resonator mirror are shown (Fig.10, Tab.1); the standard deviation of multiple measurements is 0.361 J/cm², and the difference from the S-on-1 result is less than 0.5 J/cm². Similarly, the measurement results for the window plate are shown (Fig.11, Tab.2); the standard deviation of multiple measurements is 0.064 J/cm², and the difference from the S-on-1 result is less than 0.3 J/cm². The proposed QELDD method is based on single-irradiation results of a single sample at different energy densities, eliminating the need for repetitive testing of multiple samples, which ensures good stability and accuracy while maintaining efficiency.  Conclusions  In this paper, a new laser damage threshold measurement method for optical films based on quantitative evaluation of the damage degree is proposed.
The laser-damaged area is characterized and quantified with high precision using image super-resolution white-light microscopic interferometry, and the structural characteristics of the laser damage are summarized. Based on the quantitative parameters of the laser damage degree, the laser damage threshold is obtained by fitting. In the experimental setup, a 1 064 nm laser damage system is used, and a laser resonator mirror and a window plate are selected as test samples. The standard deviations of the two sample results are 0.361 J/cm² and 0.064 J/cm², respectively. Compared with the results obtained using the S-on-1 method, the deviation of the measured results does not exceed 0.5 J/cm², indicating good stability and accuracy of the proposed method. During the quantification and characterization of the laser-damaged area, the QELDD method effectively distinguishes between valid and invalid damage points based on the damage characteristics, thereby eliminating the influence of invalid damage on the fitting and improving measurement efficiency. This method introduces a new approach to LIDT measurement, facilitates further research on the laser damage mechanism of optical components, and provides a theoretical basis for improving the manufacturing and coating processes of optical components.
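The damage-trend fitting idea can be illustrated with a toy example (hypothetical fluence-damage pairs and a linear trend assumed; the paper's actual quantitative parameters and fitting model may differ): valid damage points are fitted against fluence and the threshold is read off where the fitted trend falls to zero.

```python
import numpy as np

# Hypothetical single-shot results: fluence (J/cm^2) vs. quantified damage volume (arbitrary units)
fluence = np.array([6.0, 8.0, 10.0, 12.0, 14.0, 16.0])
damage  = np.array([0.0, 2.1,  5.0,  7.8, 10.9, 13.7])   # 0 => no valid damage at that site

valid = damage > 0                        # discard invalid (undamaged) points before fitting
slope, intercept = np.polyfit(fluence[valid], damage[valid], 1)
lidt = -intercept / slope                 # fluence at which the fitted damage trend reaches zero
print(f"estimated LIDT ≈ {lidt:.2f} J/cm²")
```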
Monocular spatial attitude measurement method guided by two dimensional active pose
Liu Feng, Guo Yinghua, Wang Lin, Gao Peipei, Zhang Yuetong
2024, 53(2): 20211026. doi: 10.3788/IRLA20211026
  Objective  Monocular vision measurement technology has the advantages of a simple structure, low cost, and convenient, flexible operation, and there are generally two types of monocular vision measurement. One combines a monocular camera with the measured object, but it requires designing a suitable cooperative target, which has certain limitations. The other combines a monocular camera with an active sensor, but adjusting or calibrating the pose relationship between the camera and the active sensor is complicated. Aiming at rapid pose measurement of space objects, this paper studies a monocular visual spatial pose measurement method based on two-dimensional active pose guidance. The method requires only a camera and a precision two-dimensional platform, collecting one image before and one after the platform rotates, to complete rapid attitude measurement of space objects. The attitude measurement method has the advantages of low cost, simple operation and a large measuring range, and it is less dependent on equipment.  Methods  A monocular attitude measurement system composed of a monocular camera, a precision two-dimensional platform and the measured object is established, and an attitude measurement model of the monocular camera, the precision two-dimensional platform and an inclinometer is designed. A precision checkerboard image and the two angles of the two-dimensional platform are captured by the camera multiple times at different image positions to carry out joint calibration of the camera and the two-dimensional platform (Fig.2). The pose relationship between the camera and the platform is obtained, and the pose relationship between the checkerboard and the initial camera coordinate system is calculated. Based on the geodetic inclinometer coordinate system, the pose relationship between the inclinometer and the attitude measuring system is calibrated according to the coordinate system relationship between the inclinometer and the checkerboard (Fig.3), and the measured values are converted to the inclinometer coordinate system, realizing rapid monocular vision measurement.  Results and Discussions  A monocular visual spatial pose measurement method based on 2D active pose guidance is studied. By acquiring precision checkerboard images multiple times, the pose relationships between the camera and the two-dimensional platform and between the inclinometer and the attitude measuring system are obtained, and the calibration errors of the pitch angle and roll angle are both less than 0.31° (Fig.7). Taking the checkerboard as the measured object and combining the calibrated parameters, the measurement error is largest when the pitch angle is about 15°, reaching 0.82°; when the roll angle is about −15°, the maximum measurement error is −0.43° (Fig.10).  Conclusions  In this paper, a monocular visual spatial pose measurement method based on 2D active pose guidance is studied, and the attitude measurement model of the monocular camera, the precision 2D platform and the inclinometer is established. The method uses only one camera and does not need to consider the baseline distance required in a binocular setup. Moreover, after calibration, the method can rapidly measure an object's attitude under fixed-axis dual-angle photography. The experimental results show that the proposed method can be used to measure the attitude of space objects quickly.
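A minimal sketch of the coordinate-frame chain used in such a measurement is given below, with entirely hypothetical calibrated transforms: the object pose solved in the camera frame is mapped through the calibrated camera-to-inclinometer transform, and pitch and roll are then read out in the level (inclinometer) frame.

```python
import numpy as np

def rot_zyx(yaw, pitch, roll):
    """Rotation from Z-Y-X Euler angles (radians)."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

def to_h(R, t):
    """Pack rotation and translation into a 4x4 homogeneous transform."""
    T = np.eye(4); T[:3, :3] = R; T[:3, 3] = t
    return T

# Hypothetical calibrated transforms (not the paper's values)
T_incl_cam = to_h(rot_zyx(0.01, -0.02, 0.005), [0.10, 0.02, 0.05])   # camera -> inclinometer frame
T_cam_obj  = to_h(rot_zyx(0.20,  0.15, -0.05), [0.00, 0.10, 1.20])   # object pose solved in the camera frame

# Express the measured object pose in the inclinometer (level) coordinate system
T_incl_obj = T_incl_cam @ T_cam_obj
R = T_incl_obj[:3, :3]
pitch = np.degrees(np.arcsin(-R[2, 0]))
roll  = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
print(f"pitch = {pitch:.2f}°, roll = {roll:.2f}°")
```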
Displacement measurement error of grating interferometer based on vector diffraction theory
Lei Lihua, Zhang Yujie, Fu Yunxia
2024, 53(2): 20230536. doi: 10.3788/IRLA20230536
  Objective  The grating interferometer displacement measurement system is one of the most precise measuring instruments in the field of metrology, but a perfect mounting attitude of the grating cannot be guaranteed during measurement. This causes a deviation between the grating vector direction and the motion vector direction, leading to periodic nonlinear errors in the displacement measurement results. In previous studies, the assembly error of the grating displacement measurement system is usually expanded in the scalar case, ignoring the effect of the incident azimuth angle on the system. Based on grating vector diffraction theory, this paper analyses the attitude and position errors between the grating, the displacement stage and the readhead that occur during displacement measurement with the grating interferometer, and illustrates the possible displacement measurement errors by analysing the angular deviations in the three dimensions, providing a theoretical basis for subsequent improvement of the device.  Methods  Ideally, the displacement measurement of a grating interferometer is based on the period of its core component, the grating. However, owing to the non-ideal assembly of the grating, the displacement stage, the readhead, the optics and other system modules, geometric errors arise in the system. The non-ideal assembly of the grating and the displacement stage, as well as of the grating and the readhead, are the main sources of these geometric errors. In this paper, we analyse the attitude and position errors between the grating, the displacement stage and the readhead that occur during displacement measurement, by establishing the displacement coordinate system OXYZ and the grating coordinate system OX'Y'Z' and referring to the attitude representation method used for aircraft in the field of inertial navigation. The roll, pitch and yaw angles of the one-dimensional grating are set as α, β and γ respectively, which are commonly used to describe the assembly state of the 1D grating relative to the translation stage. By analysing the angular deviations in the three dimensions based on grating vector diffraction theory, the possible displacement measurement errors are analysed and illustrated.  Results and Discussions  The analysis of the grating assembly errors shows that the geometric errors caused by non-ideal assembly of the metrology grating are mainly due to the rotational error angles β and γ around the Y' and Z' axes, while rotation of the grating around the X' axis does not cause any additional measurement error. The error expressions show that the error angles β and γ have the same effect on the measurement error. When analysing the readhead assembly error, it is found that the biggest difference from the grating assembly error is that the readhead assembly error causes the system to no longer satisfy the Littrow configuration, which further complicates the problem. However, precisely for this reason, a more general conclusion is derived in this paper based on the generalised one-dimensional grating equation, from which the relationship between the systematic measurement error and the three error angles α, β and γ, together with the angles θ1, θ2, Ψ1 and Ψ2 that describe the relative states of the incident P-light and Q-light, is discussed.
  Conclusions  This paper analyses the measurement errors caused by mounting problems when using the grating displacement measurement system, from the two aspects of grating assembly errors and readhead assembly errors, and gives an analytical description of the possible displacement measurement errors. In the grating interferometer displacement measurement system, the analysis based on grating vector diffraction theory shows that, when the Littrow incidence configuration is satisfied, the roll angle has no effect on the measurement results, and expressions for the effects of the pitch angle and yaw angle on the displacement measurement are derived. When the Littrow incidence configuration is not satisfied, the obliquely incident laser introduces an azimuth angle. Based on the generalised one-dimensional grating equation, a more general conclusion in the presence of the azimuth angle is deduced, which provides a theoretical basis for subsequent improvement of the device.
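For context, the generalised (conical) one-dimensional grating equation that such an analysis typically starts from is the textbook relation below, written in generic notation (θ is the polar angle from the grating normal, φ the azimuth measured from the grating vector, d the period, m the diffraction order); the paper's specific error expressions are not reproduced here.

```latex
% Conical diffraction by a 1D grating (generic textbook form)
\begin{align}
  \sin\theta_m \cos\varphi_m &= \sin\theta_i \cos\varphi_i + \frac{m\lambda}{d}, \\
  \sin\theta_m \sin\varphi_m &= \sin\theta_i \sin\varphi_i ,
\end{align}
% In the in-plane Littrow mounting (\varphi = 0,\ \theta_m = \theta_i = \theta_{\mathrm{L}})
% this reduces to 2\sin\theta_{\mathrm{L}} = m\lambda/d, the case in which the roll angle
% has no effect on the measured displacement.
```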
Accuracy analysis of a three-dimensional angle measurement sensor based on dual PSDs
Zhao Wenhe, Bai Yangyang, Wang Jinkai, Zhang Lizhong
2024, 53(2): 20230543. doi: 10.3788/IRLA20230543
  Objective  In systems such as airborne photoelectric turntables and multi-degree-of-freedom swing tables, three-dimensional angle measurement is often required. Angle measurement methods are divided into contact and non-contact measurement, and the appropriate method must be selected for the actual application scenario. For flexible supports, parallel support platforms, and cases where the rotation axis between the moving object and the base is uncertain, non-contact measurement methods need to be considered. At present, common non-contact three-axis angle measurement schemes are complex and occupy a large space, and cannot meet the volume and weight requirements of airborne and spaceborne payloads. Therefore, it is necessary to develop a non-contact three-axis angle measurement method with a simple structure and a small footprint to meet the needs of different operating environments. To this end, a non-contact three-dimensional angle measurement system based on two position sensitive detectors (PSDs) is proposed.  Methods  A three-axis angle measurement system based on dual PSDs is established. The system mainly consists of an autocollimation measurement unit and a double-sided reflection wedge (Fig.2). The autocollimation measurement unit includes a light source, PSD1, PSD2, an autocollimation lens, and the subsequent processing circuits. The light beam emitted by the light source is collimated into parallel light by the collimating lens, and PSD1 and PSD2 receive the spots formed by the reflected beams and perform signal processing through the processing circuit. The double-sided reflective wedge has a semi-reflective, semi-transparent front surface and a fully reflective rear surface; it splits the incident collimated light into two beams and reflects them back into the autocollimation lens, which converges them onto the target surfaces of the two PSDs to form light spots. According to the angle measurement principle, a calibration method for the two PSDs is designed to compensate for mounting (soldering) errors, and an FIR filtering algorithm is used to filter the acquired analog signal to improve accuracy.  Results and Discussions  A three-axis angle measurement system based on dual PSDs is designed, and a calibration experimental system (Fig.5) is established to calibrate the relative position relationship between the two PSDs. The mounting error in the relative positions of the two PSDs is compensated through the rotation matrix and translation matrix, and the compensation result is good. A 34th-order FIR filter is designed and simulated, and the experimental results show that the designed filter effectively suppresses the noise in the actually collected signals. The filter is implemented on the processing MCU for experiments, and the phase-frequency response characteristics of the selected filter are analyzed. The test results show that the response bandwidth of the filter is 1.31 kHz, which can effectively filter out high-frequency noise in the analog voltage signal. The angle measurement experimental system (Fig.13) is established, the three-axis angle measurement function of the system is verified, and the system also shows high accuracy.  Conclusions  A non-contact three-axis angle measurement system based on dual PSDs is designed.
This system has the advantages of a simple structure, small size, high accuracy, a large measurement range, high bandwidth, non-contact operation, and insensitivity to axial translation. The rotation matrix and translation matrix obtained from calibrating the two PSDs, together with the designed 34th-order FIR filter, are coded and written into an STM32F4-series microcontroller; the filter delay is approximately 525 μs, which is within an acceptable range. The processing circuit and selected devices, designed according to the actual requirements of the project, have been experimentally verified. Within a measurement range of ±2°, the yaw angle measurement accuracy reaches 0.006°, the pitch angle measurement accuracy reaches 0.009°, and the roll angle measurement accuracy reaches 0.021°. The autocollimation measurement unit weighs 230 g and fits in a 50 mm × 50 mm × 50 mm enclosure. The response frequency of the measurement system reaches 1.15 kHz. The system can measure three-axis angles in real time at high speed with high accuracy and a small volume, and is suitable for various engineering applications, providing stable and high-speed three-axis angle measurement solutions for airborne, spaceborne and other conditions.
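The FIR filtering stage can be sketched with SciPy as follows; the sampling rate and cutoff used here are placeholders (the abstract does not state them), and only the tap count (35 taps for a 34th-order linear-phase filter) and the standard group-delay relation are taken as given.

```python
import numpy as np
from scipy.signal import firwin, lfilter

fs = 32_000.0                    # assumed ADC sampling rate, Hz (placeholder)
fc = 1_310.0                     # cutoff near the reported 1.31 kHz response bandwidth
taps = firwin(numtaps=35, cutoff=fc, fs=fs, window="hamming")   # 34th-order low-pass FIR

# Linear-phase FIR group delay: (numtaps - 1) / (2 * fs)
delay_us = (len(taps) - 1) / (2 * fs) * 1e6
print(f"group delay ≈ {delay_us:.0f} µs")

# Filter a simulated noisy PSD voltage signal
t = np.arange(0, 0.02, 1 / fs)
signal = 1.0 * np.sin(2 * np.pi * 50 * t) + 0.2 * np.sin(2 * np.pi * 5_000 * t)
filtered = lfilter(taps, 1.0, signal)    # high-frequency component is strongly attenuated
```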
Calibration method of fisheye camera for high-precision collimation measurement
Xiong Kun, He Xuran, Wang Chunxi, Li Jiabin, Yang Changhao
2024, 53(2): 20230549. doi: 10.3788/IRLA20230549
  Objective  Collimation measurement is one of the most widely used precision angle and attitude measurement methods. By imaging a known reference target at infinity, the accurate angular relationship between the measured object and the reference target can be obtained, and the measurement results have high accuracy and high repeatability. Photoelectric autocollimators, electronic total stations, theodolites and other measuring and calibration instruments all take collimation measurement as their main measurement principle. Limited by the calibration accuracy of large-field-of-view, high-distortion optical systems, the cameras used in precision collimation measurement usually have a small field of view, which greatly restricts their application to large-range angle measurement. Fisheye cameras have the advantages of a large field of view, small volume and light weight, and therefore have broad prospects in the field of measurement and calibration. However, because of the large field of view and large distortion of a fisheye camera, the imaging process is highly nonlinear, and asymmetry in lens machining severely affects the imaging model parameters. For this reason, a fisheye camera calibration method for high-precision collimation measurement is proposed in this paper.  Methods  A two-step fisheye camera calibration method for collimation measurement is proposed, which consists of a radial rough calibration based on interpolation and a fine calibration based on grid compensation. The method uses interpolation instead of constructing a camera model, which effectively avoids the systematic errors caused by an inaccurate model and unreasonable parameter settings, and suppresses to a certain extent the deviations caused by lens machining asymmetry and optical system alignment. Different from commonly used performance indicators such as the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM), the mean reprojection error (MRE) selected in this paper can more effectively evaluate the camera calibration results under collimation measurement conditions.  Results and Discussions  According to classical fisheye camera models, four different virtual fisheye cameras are constructed for simulation experiments (Tab.1). The simulation results show that the calibration performance of this method on the four virtual fisheye camera models is better than that of a recently proposed calibration method (Fig.8), and the calibration uncertainty is improved by 82.63% compared with the traditional method. A fisheye camera calibration prototype based on an embedded platform is then designed (Tab.2). The calibration experiments with the prototype show that the proposed method can effectively calibrate a real fisheye camera for collimation measurement (Fig.10). After the calibration method is applied to the prototype built in this paper, the uncertainty of the incident vector solution of the prototype reaches the arcsecond level (Fig.11).  Conclusions  A fisheye camera calibration method for high-precision collimation measurement is proposed, in which the calibration process of the fisheye camera for collimation measurement is divided into two parts: radial calibration and grid calibration. Firstly, two kinds of calibration sample points are collected with the help of a high-precision turntable and a collimator.
Then the rough imaging model is constructed by radial calibration. Finally, grid calibration is used to eliminate the error caused by the non-coincidence of the rotation axis and the optical axis in the radial calibration, and to further improve the calibration accuracy. Simulation comparison experiments and prototype verification experiments prove that this method has high calibration accuracy. Moreover, the method can be applied to the high-precision calibration of all kinds of real fisheye cameras for collimation measurement and can provide technical support for the future development of collimation measurement.
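The radial rough-calibration idea, interpolation in place of an explicit camera model, can be sketched as below with hypothetical turntable/collimator sample points; the subsequent grid calibration would add a two-dimensional residual correction on top of this lookup.

```python
import numpy as np

# Radial rough calibration by interpolation (stand-in for the paper's first step):
# sample pairs (radial image distance r in pixels -> incidence angle theta in degrees)
# would come from the turntable and collimator; the values here are hypothetical.
r_samples     = np.array([0, 150, 300, 450, 600, 750, 900])
theta_samples = np.array([0, 15, 30, 45, 60, 75, 90])

def radial_lookup(r):
    """Interpolate the incidence angle for a measured radial distance, with no camera model."""
    return np.interp(r, r_samples, theta_samples)

print(radial_lookup(500.0))   # -> an angle between 45° and 60°
```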
Research on phase unwrapping technology based on improved U-Net network
Xu Ruishu, Luo Xiaonan, Shen Yaoqiong, Guo Chuangwei, Zhang Wentao, Guan Yuqing, Fu Yunxia, Lei Lihua
2024, 53(2): 20230564. doi: 10.3788/IRLA20230564
  Objective  Phase Measuring Deflectometry (PMD) is widely employed in free-form surface transmission wavefront detection because of its simplicity, high accuracy and broad detection range, and high-precision phase acquisition is a critical step in the measurement process. The phase unwrapping task plays a pivotal role in optical interferometry, magnetic resonance imaging, fringe projection profilometry (FPP) and other fields [1-4]. The challenge lies in recovering a continuously varying true phase signal from the observed wrapped phase signal confined to the range [−π, π). While ideal phase unwrapping involves adding or subtracting 2π at each pixel based on the phase difference between adjacent pixels, practical applications face challenges such as noise and phase discontinuity, which produce poles in the wrapped phase [5]. These poles cause computational errors to accumulate during the unwrapping process and lead to phase unwrapping failures. Various methods are employed to unwrap the phase and obtain the real phase distribution. To address these challenges, this paper proposes a phase unwrapping algorithm based on an improved U-Net network.  Methods  The proposed algorithm uses U-Net as the basic network, integrates a CBiLSTM module for sequence modeling, introduces an attention mechanism for better generalization, and explores an optimized loss function. During model training, a composite loss function tailored to the spatial phase unwrapping problem is defined. The attention mechanism enables better capture of global spatial relationships, while the CBiLSTM module effectively captures and stores long-term dependencies through its memory unit structure; the memory units selectively remember and forget parts of the input information, enhancing the ability to model long sequences. The proposed network is validated on simulated and real datasets, showing outstanding performance under noise, discontinuity and aliasing conditions. Comparative experiments with classic models such as U-Net [20] and Res-UNet [21], and with the methods of Wang [13] and Perera et al. [19], demonstrate the robustness of the proposed network under severe noise and discontinuities, as well as its computational efficiency in spatial phase unwrapping tasks.  Results and Discussions  Fig.10 shows the comparison between the absolute phase predicted from the wrapped phase by the trained network model and the real phase. Through the construction of the encoder-decoder model, the introduction of the CBiLSTM module and the attention mechanism module, and the definition of the composite loss function, the comparison with other models verifies the improvement in accuracy and the reduction in training cost of the proposed network model in the three situations mentioned above.
Simulation experiments verify that, by enhancing the deep learning model's attention to key phase information, the proposed network model improves the accuracy and robustness of phase unwrapping and promotes further development in fields such as optical measurement and phase imaging.  Conclusions  This paper addresses the challenge of wrapped phase unwrapping by introducing a novel convolutional architecture framed as a regression problem. The proposed network incorporates several enhancements within the encoder-decoder framework, notably a CBiLSTM module and a soft attention mechanism. Comparative analyses with existing phase unwrapping methods demonstrate the network's remarkable performance in achieving precise phase unwrapping, even under severe noise, discontinuities and aliasing. Notably, the network achieves this unwrapping capability without requiring training on very large datasets, and its significantly reduced computational time makes it well suited for tasks requiring accurate and fast phase unwrapping. Validation experiments conducted on real laboratory datasets further confirm the outstanding performance of the proposed network. The introduced model enables phase unwrapping under challenging conditions such as severe noise, discontinuities and aliasing, surpassing the limitations of traditional methods. Comparative assessments with other deep learning models show a normalized root mean square error (NRMSE) as low as 0.75%. This advance in phase unwrapping technology is of substantial significance for optical free-form surface detection, contributing to enhanced measurement accuracy, precise control of optical parameters, optimization of optical design, and quality assurance in optical manufacturing and detection.
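A small sketch of how such training pairs and the reported error metric can be set up is given below, assuming the common range-normalised definition of NRMSE (the paper's exact normalisation may differ); the "prediction" here is merely a stand-in for a network output.

```python
import numpy as np

def wrap(phi):
    """Wrap a phase map into [-pi, pi)."""
    return (phi + np.pi) % (2 * np.pi) - np.pi

def nrmse(pred, truth):
    """Range-normalised RMSE between predicted and ground-truth absolute phase."""
    rmse = np.sqrt(np.mean((pred - truth) ** 2))
    return rmse / (truth.max() - truth.min())

# Toy ground-truth absolute phase (several 2π of dynamic range) and its wrapped observation
y, x = np.mgrid[0:128, 0:128] / 128.0
truth = 18 * np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 0.08)
wrapped = wrap(truth)                                   # network input; truth is the regression target

pred = truth + np.random.default_rng(0).normal(0, 0.05, truth.shape)  # stand-in for a network output
print(f"NRMSE = {100 * nrmse(pred, truth):.2f}%")
```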
Research on onboard radiation calibration scheme based on pixel-level adaptive gain imaging system
Li Ze, Wei Jun, Huang Xiaoxian, Tang Yuyu
2024, 53(2): 20230561. doi: 10.3788/IRLA20230561
Test and analysis of vibration characters of damper in EO platform of UAV
Wang Zichen, Wang Donghe, Zhu Wei
2024, 53(1): 20230432. doi: 10.3788/IRLA20230432
  Objective  The damper is a very commonly used component in UAV electro-optical (EO) platforms, because it effectively absorbs the vibration caused by the UAV and thereby enables the platform to obtain more stable and clearer images or video. In recent years, many scholars have focused on various kinds of dampers in order to further improve the imaging quality of EO platforms. Non-angular-displacement dampers are designed to optimize the performance of the platform payload, but their practicality is limited by large volume and heavy weight, so the traditional damping absorber is still the most widely used damper in EO platforms at present. A lot of research has been conducted on the characteristics of damping absorbers, but quantitative analysis of the impact of vibration on optoelectronic platforms and of how to optimize the installation layout of an EO platform with damping absorbers has not been carried out.  Methods  According to the characteristics of airborne EO payloads, this paper analyzes and tests the angular characteristics of the damping absorber that are related to payload performance. First, the intrinsic characteristics of the damper of the EO payload are introduced. Second, the overall performance model of the EO payload is established. Meanwhile, a motion characteristic model is built, and the factors in the installation layout of the damping absorbers that may affect the performance of the airborne payload are analyzed. Factors such as stiffness, damping coefficient, installation-center distance and payload mass, which determine the resulting angular and linear displacements, are identified.  Results and Discussions  The simulation results show that the displacement of the damper occurs mainly in the vertical direction rather than the horizontal direction; within the effective stroke of the damper, the horizontal displacement is only 2% of the vertical displacement when the distance to the rotation center R is set as 200 mm, 250 mm and 300 mm. The portable test system consists of a vibration table, a UAV EO platform with four damping absorbers, and an autocollimator. All the dampers, with the same stiffness and damping coefficient, are fixed between the vibration table and the UAV EO platform, and the autocollimator with a target is used to test the impact of vibration on payload imaging quality. Four images are captured by the EO payload with the installation-center distance set as 350 mm, 300 mm, 200 mm and 100 mm, respectively, while the vibration table applies a 5 Hz low-frequency disturbance. It is clear that the image quality is strongly affected by the installation-center distance; therefore, the installation spacing of the diagonal dampers should be increased as much as possible in applications.  Conclusions  According to the characteristics of the EO payload, the characteristics, motion modeling and test applications of the UAV EO platform and its damping absorbers are studied in depth. On the basis of introducing the working principles of the damping absorber and the EO platform, a motion characteristic model of the platform with damping absorbers is established. The theoretical model is simulated, and test results are also obtained in the laboratory.
All the results indicate that the model basically reflects the impact of the damping absorbers on the overall performance of the UAV optoelectronic platform, and the results of this research are helpful for optimizing the design, installation and practical engineering application of UAV platforms.
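As background for the motion model, the textbook single-degree-of-freedom isolator transmissibility is often used as the vertical-motion baseline; the relation below is that standard result in generic notation (m payload mass, k damper stiffness, ζ damping ratio), while the paper's model additionally couples angular motion through the installation-center distance.

```latex
% Single-degree-of-freedom isolator transmissibility (standard result, generic notation)
\begin{equation}
  T(r) \;=\; \frac{X_{\text{payload}}}{X_{\text{base}}}
       \;=\; \sqrt{\frac{1+(2\zeta r)^{2}}{\left(1-r^{2}\right)^{2}+(2\zeta r)^{2}}},
  \qquad r=\frac{\omega}{\omega_n}, \quad \omega_n=\sqrt{\frac{k}{m}} .
\end{equation}
```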
Research on fine characterization technology of key parameters of line width of Si/SiO2 multilayer film
Chu Xiaoyao, Shen Yaoqiong, Liu Liqin, Zou Wenzhe, Guan Yuqing, Guo Chuangwei, Zhang Yujie, Liang Lijie, Kong Ming, Lei Lihua
2024, 53(1): 20230475. doi: 10.3788/IRLA20230475
  Objective  As key parameters of line width, line edge roughness (LER) and line width roughness (LWR) are important indicators of the quality of line width standard samples, and their accurate inspection is important for characterizing the reliability and uniformity of line width reference materials. Through the measurement and characterization of LER and LWR, the quality of line width standard samples can be effectively evaluated. Because of the magnification problem in SEM measurement, the measurement and characterization of LER and LWR are subject to deviations. Therefore, before using an SEM to measure the line width, its magnification needs to be calibrated in advance with standard reference materials.  Methods  With the self-traceable grating reference material as the standard for value transfer (Fig.2), the SEM is used to scan the self-traceable grating reference material, and the measured grating period of the self-traceable grating is obtained (Fig.3). It is compared with the nominal grating period, and the SEM calibration factor is obtained, realizing direct traceability and magnification calibration of the scanning electron microscope. The calibrated SEM is used to measure the multilayer-film line width standard samples of different nominal values in different areas and at different magnifications. Image processing is used to determine the positions of the line edges and the average line edge based on least squares fitting, and the root-mean-square roughness, as the amplitude quantization parameter, is calculated for the LER and LWR (Fig.4).  Results and Discussions  Calibration of the SEM at different magnifications is realized and the calibration factors at different magnifications are obtained, which ensures the accuracy and traceability of the measurement results and shortens the traceability chain. The measurement results for line widths of different sizes are essentially consistent at different positions and different magnifications (Tab.2, Fig.8); the fluctuation range of the line edge roughness is relatively small, the measured values are consistent, and the variation of the line width is small (Tab.3, Fig.9). This shows that the edges of the line width samples are relatively smooth and the line width distribution is relatively uniform, with good uniformity and consistency, which indicates that the Si/SiO2 multilayer film deposition technology has advantages in controlling the line width dimension and edge characteristics.  Conclusions  The SEM value traceability and magnification calibration method based on the self-traceable grating reference material shortens the traceability chain, reduces the traceability error introduced in the value transfer process, improves the accuracy and reliability of SEM measurement, and makes a flatter value transfer chain possible. Through the measurement and analysis of line edge roughness and line width roughness, accurate characterization of the line width and edge characteristics is achieved, providing metrological support for high-precision nanoscale measurement and microelectronics manufacturing.
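The LER/LWR evaluation described above can be sketched as follows, assuming the RMS (1σ) convention stated in the abstract (3σ conventions also exist) and hypothetical edge profiles in nanometres extracted from a calibrated SEM image.

```python
import numpy as np

def ler(edge_positions):
    """Line-edge roughness: RMS deviation of detected edge points from their
    least-squares-fitted straight average edge."""
    y = np.arange(len(edge_positions))
    fit = np.polyval(np.polyfit(y, edge_positions, 1), y)
    return np.sqrt(np.mean((edge_positions - fit) ** 2))

def lwr(left_edge, right_edge):
    """Line-width roughness: RMS deviation of the local line width about its mean."""
    width = np.asarray(right_edge) - np.asarray(left_edge)
    return np.sqrt(np.mean((width - width.mean()) ** 2))

# Hypothetical left/right edge profiles (nm) extracted from a calibrated SEM image
rng = np.random.default_rng(3)
left  = 100.0 + rng.normal(0, 1.2, 512)
right = 160.0 + rng.normal(0, 1.2, 512)
print(f"LER(left) = {ler(left):.2f} nm, LWR = {lwr(left, right):.2f} nm")
```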
Research on jitter compensation algorithm in spectral confocal thickness measurement system
Li Chunyan, Li Danlin, Liu Jihong, Liu Chang, Li Ke, Jiang Jiewei
2024, 53(1): 20230444. doi: 10.3788/IRLA20230444
[Abstract](86) [FullText HTML] (17) [PDF 4377KB](26)
In order to obtain multi-point data from a sample, the spectral confocal displacement sensing system is moved during measurement, which produces a jitter effect and causes drift of the measurement data. Based on the realized spectral confocal thickness measurement system, the effect of jitter is studied and a jitter compensation algorithm is explored. Firstly, based on the spectral confocal thickness measurement model and the jitter present when the probe is tilted to a certain extent relative to the optical axis, a relational model of the effect of jitter on thickness measurement is deduced, and the thickness distributions of four kinds of samples under different degrees of random jitter are analyzed by the Monte Carlo simulation method. The theoretical thickness probability density function is compared with the Monte Carlo results to verify the correctness of its expression. The results show that the jitter effect degrades the measurement performance, especially when the sample thickness is large; when the standard deviation of jitter is large, the measurement of thinner samples has better anti-jitter performance. Then, in order to compensate for the effect of jitter on the measurement results, Savitzky-Golay filtering and Gaussian fitting are used to filter the spectral signal and extract its peak wavelength, and a jitter error compensation algorithm is established. Finally, experimental measurements were conducted on a sample with a thickness of (1.0±0.1) mm, and the average thickness was measured to be 1.064 0 mm. The compensated relative standard deviation was 0.29%, verifying the effectiveness of the jitter compensation algorithm. This research provides guidance for improving the measurement stability and accuracy of the system.  Objective  With the development of miniature integrated optical instruments such as optical communications and optical sensing, the requirements on transparent materials are becoming more and more stringent. Highly accurate thickness measurements help guide their precise application and control the performance of related ultra-precision optical instruments, making accurate thickness inspection necessary. The spectral confocal method uses a broad-spectrum light source to irradiate the surface of the object, exploits optical dispersion so that the dispersive objective lens produces axial chromatic aberration, establishes the correspondence between the dispersion distance and the wavelength, and uses a spectrometer to detect the peak wavelength of the light focused on and reflected back from the object surface to obtain accurate axial position or micro-displacement data. This approach breaks through the diffraction limit of ordinary optical microscopes, achieves ultra-high ranging resolution on the nanometer scale, and has wide adaptability to environments and materials. When measuring the thickness of transparent materials with the spectral confocal method, the jitter effect alters the refraction of the beam entering the sample, and random noise is present in the spectral response curve reflected from the sample surface, which leads to drift of the measurement data.
On this basis, the relational model of the effect of jitter on spectral confocal thickness measurement is first derived in this paper, and the thickness probability density function of the sample under different degrees of random jitter is simulated and analyzed by the Monte Carlo method. In order to compensate for the effect of jitter on the measurement results, Savitzky-Golay filtering and Gaussian fitting are proposed for extracting the peak wavelength of the spectral signal, and a jitter compensation algorithm is established. Finally, experimental measurements show that the stability of the results is improved, verifying the effectiveness of the algorithm.   Methods  The effect of jitter on the thickness measurement of transparent materials is studied. First, the thickness measurement models were derived for the probe not tilted (Fig.2) and tilted (Fig.3) with respect to the optical axis, the influence of jitter on the thickness measurement results was characterized by the optical axis tilt, and simulation analysis was carried out (Fig.4). Then, by comparing thickness measurement errors and jitter standard deviations at different wavelengths (Fig.5), and after a comparative analysis of various algorithms, the spectral noise was filtered by the Savitzky-Golay filtering algorithm (Fig.7-8) and the peak wavelength of the spectral signal was extracted by the Gaussian fitting algorithm (Tab.1), constructing an optimized jitter compensation algorithm. Finally, the validity of the Savitzky-Golay filtering and Gaussian fitting algorithms for jitter compensation in spectral confocal thickness measurement is verified (Tab.2).   Results and Discussions   The thickness of the sample under static conditions depends only on the focusing wavelength, the angle of incidence and the refractive index of the transparent material. The random jitter angle is the main source of thickness measurement error, and the error caused by sensor probe jitter should not be neglected. By analyzing the effect of the random jitter angle on the measurement error, a jitter compensation mechanism is established to reduce the measurement error. The thickness measurement data follow a non-central chi-square distribution; the jitter effect degrades the measurement performance, especially for larger sample thicknesses, while for a larger jitter standard deviation thinner samples have better anti-jitter performance. Extracting the peak wavelength of the spectral signal by S-G filtering and Gaussian fitting reduces the error caused by mechanical vibration of the probe and improves the measurement stability of the spectral confocal measurement system.   Conclusions  In this paper, thickness measurement of transparent materials is realized based on the spectral confocal method, in which the jitter effect generated by movement causes the measurement data to drift; the influence of the jitter effect on thickness measurement is therefore studied systematically. Firstly, the relational model of the effect of jitter on thickness measurement is established based on the principle of the spectral confocal thickness measurement system, a theoretical derivation is carried out, and Monte Carlo simulation is used for verification. Secondly, the thickness PDF and the Monte Carlo simulation results are compared to verify the correctness of the thickness PDF expression.
The results show that the jitter effect degrades the measurement performance, especially when the sample thickness is large; when the standard deviation of jitter is large, the measurement of thinner samples has better anti-jitter performance. In order to correct or compensate for the effect of jitter on the measurement results, S-G filtering and Gaussian fitting are used to filter the random noise and extract the peak wavelength of the spectral signal, and the jitter error compensation algorithm is modeled. Finally, experimental measurements were conducted on a sample with a thickness of (1.0±0.1) mm. Under stable conditions, the average thickness was measured to be 1.064 0 mm. The relative standard deviation of the moving measurement results was reduced from 1.86% before compensation to 0.29% after compensation, verifying the effectiveness of the jitter error compensation algorithm proposed in this paper, and improvement measures are put forward. The results provide guidance for optimizing the system structure and further improving system performance, and help advance the practical application of the spectral confocal displacement sensing system for stable measurement.
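As an illustration of the compensation idea described above, the sketch below applies Savitzky-Golay smoothing followed by a Gaussian fit to locate the peak wavelength of a spectral response; it is not the authors' implementation, and the window length, polynomial order, and fitting span are assumed values.

```python
import numpy as np
from scipy.signal import savgol_filter
from scipy.optimize import curve_fit

def gaussian(x, a, mu, sigma, c):
    return a * np.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) + c

def peak_wavelength(wavelengths, intensities, window=21, order=3, span=15):
    """Smooth the spectrum with an S-G filter, then refine the peak by Gaussian fitting."""
    smoothed = savgol_filter(intensities, window_length=window, polyorder=order)
    i0 = int(np.argmax(smoothed))                            # coarse peak index
    lo, hi = max(i0 - span, 0), min(i0 + span, len(wavelengths))
    p0 = [smoothed[i0] - smoothed.min(), wavelengths[i0], 1.0, smoothed.min()]
    popt, _ = curve_fit(gaussian, wavelengths[lo:hi], smoothed[lo:hi], p0=p0)
    return popt[1]                                           # fitted center = peak wavelength
```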
High dynamic range surface measurement method based on adaptive multi-exposure fusion
Lei Jingfa, Xie Haoran, Li Yongling, Wu Dong, Zhang Miao, Zhao Ruhai
2024, 53(1): 20230370. doi: 10.3788/IRLA20230370
[Abstract](134) [FullText HTML] (49) [PDF 5173KB](45)
  Objective   For objects with low-dynamic-range surfaces, a single exposure can provide sufficient exposure, but for objects with high-dynamic-range surfaces it is difficult to obtain high-quality fringe patterns with a single exposure. Multi-exposure fusion technology fuses fringe patterns captured at multiple exposures, which can effectively improve the definition of the fringe pattern and thereby the accuracy of phase measurement. Traditional multi-exposure fusion requires the exposure times to be set manually, which suffers from low efficiency and poor exposure accuracy; in this paper, adaptive exposure is used to obtain the exposure times, which avoids the disadvantages of manual exposure. Although the fringe image fused by traditional multi-exposure fusion removes over-exposed points, the overall quality of the fringe image is still not high. Therefore, this paper improves the fusion process of multi-exposure images and obtains a fringe map with better image quality.   Methods   Firstly, the images taken at the initial exposure time are analyzed by histogram, the areas with different reflectance on the surface of the measured object are divided into several groups, and the optimal exposure time of each group is calculated. On this basis, images of projected white light and projected fringes are captured at the optimal exposure time of each group. After removing the areas whose gray values exceed the set threshold, the image collected under white-light projection is turned into a mask image and multiplied with the fringe image acquired at the same exposure time, and brightness compression and fusion are performed on the multiplied images of all groups. Finally, the contrast and clarity of the fused fringe image are improved by the CLAHE algorithm, after which fringe unwrapping and point cloud reconstruction are performed.   Results and Discussions   The adaptive exposure used in this paper is more efficient and accurate than manual exposure (Fig.6). For the three high-dynamic-range objects, the U-card, the Connection Block and the Disc, the fringe image fused by the proposed method has no overly bright or overly dark areas and the overall quality of the fringe pattern is good (Fig.7, Fig.8, Fig.9). There is no obvious loss in the point cloud/image after 3D reconstruction (Fig.10, Fig.11, Fig.12). The number of point clouds measured by the proposed method is similar to that measured by the spray imaging agent method, the reconstruction rate reaches more than 99.5% (Fig.13), and the measured absolute error and relative error of the standard block step height are only 0.062 mm and 0.69% (Tab.2).   Conclusions   Aiming at the failure of 3D contour detection for objects with high-dynamic-range surfaces, this paper proposes an improved multi-exposure fusion method that replaces manually set exposure with adaptive exposure. At the same time, in the image fusion process, the contrast and clarity of the fringe image are improved by threshold setting, brightness compression, and the CLAHE algorithm.
Experimental results show that adaptive exposure is more efficient and accurate than manually setting the exposure. The point cloud reconstruction rate of different high-dynamic-range objects is above 99.75%, and the measured absolute error and relative error of the standard block step height are only 0.062 mm and 0.69%, effectively solving the problem of missing 3D point clouds when detecting objects with high-dynamic-range surfaces and improving the efficiency and accuracy of 3D profile measurement.
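The following sketch illustrates the general mask-then-fuse-then-enhance pipeline described above using OpenCV; it is a simplified stand-in for the paper's method, and the saturation threshold and CLAHE parameters are assumed values.

```python
import cv2
import numpy as np

def fuse_fringe_images(white_imgs, fringe_imgs, threshold=250):
    """Fuse fringe images from several exposures, masking over-exposed regions."""
    acc = np.zeros(fringe_imgs[0].shape, dtype=np.float32)
    for white, fringe in zip(white_imgs, fringe_imgs):
        mask = (white < threshold).astype(np.float32)        # keep pixels below the gray threshold
        acc += mask * fringe.astype(np.float32)              # white-light mask multiplied with fringe image
    fused = cv2.normalize(acc, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)  # brightness compression
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))                 # contrast/clarity enhancement
    return clahe.apply(fused)
```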
Comparison and analysis of the overall architecture of foreign EO targeting pod detection system
Li Jian, Zhang Dayong
2024, 53(1): 20230353. doi: 10.3788/IRLA20230353
[Abstract](149) [FullText HTML] (29) [PDF 2228KB](46)
  Significance   The technical characteristics and development process of four generations of EO (electro-optical) targeting pods are compared, and the design focus and key points of each generation are summarized. Focusing on the overall design of the detection systems of the AN/AAQ-33 Sniper XR ATP pod and the AN/ASQ-228 ATFLIR pod, as well as newly emerging fourth-generation products such as the Litening 5 and Talios pods, this paper provides a reference for the development of a new generation of EO targeting pod detection systems.   Progress  Firstly, the ratio of optical aperture to pod diameter (ROP) is defined as a standard for measuring the integration level of the optics, mechanics and servo control system; the higher the ROP, the higher the degree of system integration.   Secondly, the optical systems of ATP and ATFLIR, regarded as typical third-generation targeting pods, are analyzed. Both adopt a series common-optical-path architecture in which the front telescope system and the servo frame platform are placed at the head of the pod, and the compressed parallel beams are introduced through an optical hinge and a fast steering mirror (FSM) into the beam splitter and the rear detection/laser emission system located in the middle of the pod. The small field of view (sFOV) of both pods is about 1.5°×1.5°, the wavebands are 0.7-0.9 μm and 3.7-4.8 μm, and their modulation transfer functions (MTF) are close to the diffraction limit. A refractive front telescope system like that of ATP with a φ150 mm common optical path is forward designed, and the result verifies that the optical system, including the servo frame platform, can be installed in a pod of φ305 mm diameter, giving an ROP of 0.492; an off-axis three-mirror anastigmat (TMA) front telescope system like that of ATFLIR with a φ150 mm common optical path is forward designed, and the result verifies that the optical system can be installed in a pod of φ330 mm diameter, giving an ROP of 0.455 (Fig.4, Fig.7).   Finally, as a comparison, the optical parameters of Litening 5 and Talios (Fig.9-10), regarded as fourth-generation targeting pods, are introduced. Their detection systems adopt a parallel common-cabin layout, with all the optical payload and the servo frame platform installed inside a sphere of φ406 mm diameter, while their largest optical apertures are still φ150 mm. The ROPs of Litening 5 and Talios are 0.37 and 0.38, much lower than those of ATP and ATFLIR, indicating lower integration levels. The Litening 5 pod adds a shortwave infrared imaging band, which has high fog penetration ability; the Talios pod adds a step-and-stare scan imaging capability; both add a visible-light color imaging function to improve the detection and recognition probability. However, the optical design analysis shows that ATP and ATFLIR could easily modify their optical systems to achieve these functions, indicating that the two pods have strong vitality owing to their forward-looking overall architectures, and that improving the ATFLIR optics is much easier than improving that of ATP.   Conclusions and Prospects  In the future, the targeting pod needs to integrate functions such as air-to-air detection, laser communication, and directional infrared countermeasures (DIRCM), and must have high functional density. The series layout architecture using a pure-reflection common-path front telescope system, optical hinges, an FSM, and a rear detection/laser emission system has strong scalability and expansibility.
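To make the ROP figure of merit concrete, the short calculation below reproduces it from the aperture and pod-diameter values quoted in this abstract (Talios is omitted because its exact pod diameter is not given here); the dictionary layout is purely illustrative.

```python
# ROP = optical aperture / pod (or turret) diameter, both in mm
pods = {
    "Sniper ATP": (150, 305),    # refractive front telescope, φ305 mm pod
    "ATFLIR":     (150, 330),    # off-axis TMA front telescope, φ330 mm pod
    "Litening 5": (150, 406),    # parallel common-cabin layout, φ406 mm sphere
}
for name, (aperture_mm, diameter_mm) in pods.items():
    print(f"{name}: ROP = {aperture_mm / diameter_mm:.3f}")
# Prints 0.492, 0.455, 0.369 (the abstract rounds Litening 5 to 0.37)
```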
Field prototype for rapid classification of suspended particles in water based on polarized light scattering and fluorescence measurement
Xiong Zhihang, Mai Haoji, Huang Zhuangfan, Li Jingteng, Sun Peitao, Wang Jialin, Xie Yongtao, He Zixi, Zeng Yaguang, Wang Hongjian, Guo Zhiming, Liao Ran, Ma Hui
2023, 52(9): 20230030. doi: 10.3788/IRLA20230030
[Abstract](124) [FullText HTML] (26) [PDF 2621KB](26)
  Objective  Suspended particles in water include solid or liquid particles such as sediment, microplastics, and microalgae. Accurate monitoring of their categories and concentrations is of great scientific and practical significance for studying and protecting aquatic ecosystems. Various optical instruments have been developed to probe suspended particles in water, which can be divided into two categories according to their measurement methods: one category measures the overall characteristics of all particles in a body of water, while the other measures individual particles. The Water Quality Analyzer (QWA) provides estimates of particle concentration and size distribution, chlorophyll-a concentration, and other water quality parameters. However, the QWA is limited in its ability to accurately identify the categories of suspended particles in water. Underwater flow cytometry enables the characterization of various categories of particles by separating a water sample into individual particles that are then measured. However, this technique is expensive and requires complex sample pretreatment, which limits its application. Therefore, it is necessary to develop a prototype for field detection of water samples collected in the wild, with the goal of quickly determining the categories, numbers, and proportions of suspended particles in water.  Methods  A Suspended Particle Classifier (SPC) has been developed in this paper, and its schematic is depicted (Fig.1). The SPC employs a 445 nm laser as the excitation source to induce chlorophyll fluorescence, and the polarization state of the laser is modulated by a polarization state generator. The SPC obtains polarized light scattering and fluorescence signals from individual particles, which are combined with a Support Vector Machine (SVM) to classify particles based on their optical properties. To ensure its suitability for field use, the SPC is equipped with a drainage tube for the transportation of water samples and an industrial computer for instrument control and data analysis. Standard samples of sediments, microplastics, and microalgae were collected, and datasets were created to train the SVM classifier. Subsequently, the SPC was deployed alongside the QWA in the Yamen Waterway for 25 hours to evaluate its performance (Fig.3). The accuracy of the SPC classification was verified using data obtained from the QWA.  Results and Discussions  The SPC's classification accuracies for standard samples of sediment, microplastics, and microalgae were found to be 95.3%, 93.3%, and 97.9% (Fig.4), respectively, indicating that the classifier performs well in classifying these particles. The average accuracy and recall rate were found to be 95.5% (Tab.1), indicating that the SVM model has strong feature extraction ability. These results suggest that the SPC can accurately classify standard samples. When applied in the Yamen Waterway, the SPC was able to rapidly measure water samples collected in the field and track the changes in the numbers of sediment, microplastic, and microalgae particles in different water layers over time (Fig.5). Furthermore, the number of microalgae identified by the SPC was found to have a strong correlation with the concentrations of chlorophyll-a and phycoerythrin measured by the QWA (Fig.6, Tab.2). Additionally, the so-called effective time cross-section of sediments identified by the SPC was found to have a strong correlation with the turbidity value measured by the QWA (Fig.6, Tab.2), further validating the reliability of the SPC's classification results.
Conclusions  In this study, a suspended particle classifier was developed with the aim of classifying and counting suspended particles in water samples collected in the field. The SPC probes polarized light scattering and fluorescence signals from individual suspended particles and uses SVM to classify them based on their optical properties. The classification accuracy for standard samples of sediment, microplastics, and microalgae was over 95%. To validate the SPC's classification ability for field water samples, the SPC and QWA were deployed in the Yamen Waterway for 25 hours of synchronous testing. The SPC was able to track changes in the number of sediment, microplastic, and microalgae in different water layers over time. There was a strong correlation between the SPC and QWA measurement data, indicating the high reliability of the SPC in classifying particles in field water samples. These results demonstrate that the SPC can rapidly detect and classify suspended particles in water and has the potential to be a valuable tool for exploring aquatic ecosystems.
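As a hedged illustration of the classification step, the sketch below trains an RBF-kernel SVM on per-particle optical features (polarized scattering plus fluorescence); the feature files, label coding, and hyperparameters are assumptions, not the SPC's actual configuration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Hypothetical per-particle feature matrix and labels
X = np.load("particle_features.npy")   # shape (n_particles, n_features): scattering + fluorescence
y = np.load("particle_labels.npy")     # 0 = sediment, 1 = microplastic, 2 = microalga

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=1.0, gamma="scale")   # SVM classifier on optical properties
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```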
Observation experiment on star-light deflection of star-points under high-speed mixing flow
Chen Bing, Chen Shaojie, Chen Xiao, Li Chonghui, Zheng Yong
2023, 52(9): 20220802. doi: 10.3788/IRLA20220802
[Abstract](50) [FullText HTML] (14) [PDF 3290KB](17)
  Objective  Celestial navigation is an important method of autonomous navigation. Astronomical observation from a high-speed aircraft is inevitably disturbed by the high-speed flow near the observation window, which causes star map degradation such as displacement and blurring. This decreases the accuracy of star centroid extraction and has a direct effect on the accuracy of astronomical attitude determination. At present, most studies on the calculation and correction of star map degradation are based on computer simulation, whose results are greatly affected by the configuration of model parameters and may not be consistent with the real physical process. Therefore, it is necessary to construct physical experimental conditions for observing the influence of high-speed flow on star-light deflection and to carry out experimental research.   Methods  A small static wind tunnel is built, which can generate a Mach 2.5/3.5 mixing layer structure in the test section. The calibrated simulated star-points on an indoor dome with a diameter of 10 m are observed through the high-speed flow, and the star centroids are extracted to collect the imaging displacements caused by the real flow. The star image data disturbed by the flow field are obtained and compared with computer simulation results.   Results and Discussions   The deflection caused by the flow is greater than the value estimated by computer simulation. At the near end of the tunnel nozzle, the high-speed mixing layer produces a large star-light deflection: the mean deflection perpendicular to the flow direction is less than 0.5″, while that along the flow direction is 3.85″, with a maximum close to 4.89″ (Fig.8). At the far end, the mean deflections in these two directions are −1.36″ and −0.49″ respectively (Fig.9). The variation of starlight deflection at the near end is smaller and more stable than that at the far end, which is conducive to modeling and correction (Fig.10).   Conclusions  A star-point observation system under high-speed flow was constructed based on the indoor dome, and a Mach 2.5/3.5 mixed high-speed flow field was generated in the experimental observation section. The target star-points were observed from different observation positions, and quantitative conclusions on the disturbance of star-point imaging by the high-speed flow were obtained for the first time by a physical observation experiment.   The results show that: 1) Star-light deflection is mainly concentrated in the streamwise direction, which is consistent with the expectation of the theoretical analysis; 2) The star-light deflection caused by the flow field at the near end of the nozzle is larger than that at the far end, but its variation range is smaller and more stable, which is conducive to modeling and correction; 3) The absolute value of the target starlight deflection caused by the high-speed mixed flow is greater than the simulation result at both the near and far ends of the tunnel nozzle. The current work has proved the stability and effectiveness of the experimental system. Subsequent systematic observations at different altitude and azimuth angles can provide an experimental basis for forming a systematic understanding of the influence of flow structure on navigation starlight acquisition, and provide physical observation data for simulation modeling.
Then, a modified model of the influence of high-speed flow fields with different structures on starlight could be established, which may provide theoretical support for suppressing aerodynamic influence and correcting the starlight deflection introduced by the cooling air film in the astronomical observation of hypersonic vehicles.
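The star centroid extraction mentioned above is typically done with an intensity-weighted center of mass; the sketch below shows that standard calculation on a small image window and is not necessarily the exact algorithm used in the experiment.

```python
import numpy as np

def star_centroid(patch, background=0.0):
    """Intensity-weighted centroid (x, y) of a star image patch, in pixel coordinates."""
    img = np.clip(patch.astype(np.float64) - background, 0.0, None)  # remove background level
    total = img.sum()
    if total == 0:
        raise ValueError("patch contains no signal above background")
    ys, xs = np.indices(img.shape)
    return (xs * img).sum() / total, (ys * img).sum() / total
```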
Research on harmonic detection pressure inversion based on Gauss/Lorentz line fitting ratio (invited)
Gao Nan, Yu Yongbo, Du Zhenhui, Li Jinyi, Meng Zhaozong, Zhang Zonghua
2023, 52(8): 20230428. doi: 10.3788/IRLA20230428
[Abstract](73) [FullText HTML] (15) [PDF 1985KB](26)
  Objective  Tunable laser absorption spectroscopy (TLAS) has advantages such as non-contact operation, anti-interference, and high sensitivity, and can be used for gas concentration, temperature, and pressure measurement. Existing pressure detection models mostly extract and calculate a limited number of feature points of the spectral line, which makes the measurement results susceptible to interference and leads to significant measurement errors. Therefore, it is necessary to establish a new, interference-resistant and stable pressure detection model. To solve this problem, a mathematical model is proposed for the relationship between pressure and the spectral line shape function within the low and high pressure ranges, based on the gas pressure measurement method of absorption line width.   Methods  Simulation research on the second harmonic of absorption lines under different pressures was conducted based on the principle of spectral line broadening. To simulate pressure changes by adjusting the Gauss/Lorentz half-width ratio, the second-order derivative of the convolution of the Gauss and Lorentz functions was used to simulate the second harmonic of the absorption line. By establishing a mathematical model relating the Gauss/Lorentz line fitting ratio to pressure, the fitting relationship between the two was obtained under ideal conditions and under the influence of laser linewidth, white noise, and background interference. A comparative analysis was conducted of the stability, under dynamic noise and background interference, of the fitting ratio against the feature values used to calculate pressure in existing models, such as the peak width and the 2f/4f amplitude. Finally, the measured signal of CO2 gas at 1 580 nm was processed to verify the simulation results.   Results and Discussions   The simulation results show that under ideal conditions and under the influence of laser linewidth, white noise, and background interference, there is a third-order fitting relationship between the Gauss/Lorentz line fitting ratio and pressure, with the fitting degree remaining above 0.998 0 (Fig.3-6). Compared with traditional models, the ratio has better stability under dynamic noise and background interference (Tab.1). The experimental results show that the third-order fit between the Gauss/Lorentz line fitting ratio of the measured spectral line and the pressure is 0.986 3 (Fig.9), slightly lower than the simulated fit of 0.998 7 (Fig.2), which is consistent with the simulation analysis.   Conclusions  In order to establish a more effective pressure detection method, based on the principle of spectral line broadening, the pressure change is simulated using the ratio of the Gauss function half-width to the Lorentz function half-width, and the Voigt function is used to describe the absorption line shape. A mathematical model was established for the relationship between pressure and the fitting ratio under ideal conditions and under laser linewidth, white noise, and background interference. Simulation analysis shows that the relationship between pressure and the fitting ratio satisfies a third-order fit, which not only holds under the influence of laser linewidth, white noise, and background interference, but also maintains stability under dynamic noise and background interference, exhibiting advantages in pressure detection compared with traditional models. Experimental validation was carried out using CO2 absorption spectra.
The fitting degree obtained from the experimental data was slightly lower than the simulated value, but the trend was consistent, indicating the effectiveness of the established mathematical model. The proposed method has theoretical significance and practical value in pressure measurement and provides new ideas for pressure detection.
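To illustrate the inversion idea, the sketch below builds a 2f-like signal from a Voigt profile (the convolution of Gaussian and Lorentzian shapes), extracts a Gauss/Lorentz width ratio by least-squares fitting, and maps the ratio to pressure with a third-order polynomial; all numeric values are hypothetical, and the exact ratio definition is an assumption consistent with the abstract rather than the paper's formulation.

```python
import numpy as np
from scipy.special import voigt_profile      # Voigt = Gaussian convolved with Lorentzian
from scipy.optimize import curve_fit

def second_harmonic(nu, sigma, gamma, amp):
    """Approximate the 2f signal as the negative second derivative of a Voigt line."""
    line = amp * voigt_profile(nu, sigma, gamma)
    return -np.gradient(np.gradient(line, nu), nu)

nu = np.linspace(-1.0, 1.0, 801)                            # detuning axis (cm^-1, hypothetical)
y_meas = second_harmonic(nu, 0.02, 0.05, 1.0)               # stand-in for a measured 2f signal
popt, _ = curve_fit(second_harmonic, nu, y_meas, p0=[0.03, 0.04, 0.8])
ratio = popt[0] / popt[1]                                    # Gauss/Lorentz width ratio

# Third-order polynomial model between fitting ratio and pressure (hypothetical data)
ratios = np.array([0.2, 0.4, 0.6, 0.8, 1.0])
pressures_kpa = np.array([90.0, 70.0, 50.0, 35.0, 25.0])
coeffs = np.polyfit(ratios, pressures_kpa, 3)
print(f"ratio = {ratio:.3f}, inverted pressure ≈ {np.polyval(coeffs, ratio):.1f} kPa")
```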